Cortical free association dynamics: distinct phases of a latching network
A Potts associative memory network has been proposed as a simplified model of
macroscopic cortical dynamics, in which each Potts unit stands for a patch of
cortex, which can be activated in one of S local attractor states. The internal
neuronal dynamics of the patch are not described by the model; rather, they are
subsumed into an effective description in terms of graded Potts units, with
adaptation effects both specific to each attractor state and generic to the
patch. If each unit, or patch, receives effective (tensor) connections from C
other units, the network has been shown to be able to store a large number p of
global patterns, or network attractors, each with a fraction a of the units
active, where the critical load p_c scales roughly like p_c ~ (C S^2)/(a
ln(1/a)) (if the patterns are randomly correlated). Interestingly, after
retrieving an externally cued attractor, the network can continue jumping, or
latching, from attractor to attractor, driven by adaptation effects. The
occurrence and duration of latching dynamics are found through simulations to
depend critically on the strength of the local attractor states, expressed in
the Potts model by the parameter w. Here we describe, first with simulations and
then analytically, the boundaries between distinct phases of no latching,
transient latching, and sustained latching, deriving a phase diagram in the
(w, T) plane, where T parametrizes thermal noise effects. Implications for real
cortical dynamics are briefly reviewed in the conclusions.
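The quoted capacity scaling can be turned into a quick numerical estimate. A minimal sketch, assuming a proportionality constant k that is not given in the abstract (all parameter values below are illustrative):

```python
import math

def potts_capacity(C, S, a, k=1.0):
    """Estimated critical load p_c ~ k * C * S**2 / (a * ln(1/a)).

    C: effective connections per unit, S: local attractor states per patch,
    a: fraction of active units per pattern. k is an assumed proportionality
    constant, not taken from the paper.
    """
    return k * C * S ** 2 / (a * math.log(1.0 / a))

# Sparser patterns (smaller a) raise the estimated capacity:
print(potts_capacity(C=150, S=7, a=0.25))
print(potts_capacity(C=150, S=7, a=0.05))
```

The example only illustrates the direction of the scaling; absolute numbers depend on the unknown prefactor.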
Adaptive self-organization in a realistic neural network model
Information processing in complex systems is often found to be maximally
efficient close to critical states associated with phase transitions. It is
therefore conceivable that neural information processing, too, operates close
to criticality. This is further supported by the observation of power-law
distributions, which are a hallmark of phase transitions. An important open
question is how neural networks could remain close to a critical point while
undergoing a continual change in the course of development, adaptation,
learning, and more. An influential contribution was made by Bornholdt and
Rohlf, introducing a generic mechanism of robust self-organized criticality in
adaptive networks. Here, we address the question of whether this mechanism is
relevant for real neural networks. We show in a realistic model that
spike-time-dependent synaptic plasticity can self-organize neural networks
robustly toward criticality. Our model reproduces several empirical
observations and makes testable predictions on the distribution of synaptic
strengths, relating them to the critical state of the network. These results
suggest that the interplay between dynamics and topology may be essential for
neural information processing.
Comment: 6 pages, 4 figures
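The Bornholdt-Rohlf mechanism referred to above can be sketched with a toy threshold network: nodes whose state froze during a run gain a random link, nodes that kept changing lose one, and the mean connectivity settles near a low critical value. This is a simplified stand-in for the paper's spike-time-dependent plasticity model, not that model itself; all parameters are illustrative.

```python
import random

def step(state, links):
    """One parallel update of +/-1 threshold units; links[i] maps j -> weight."""
    new = []
    for i in range(len(state)):
        h = sum(w * state[j] for j, w in links[i].items())
        new.append(1 if h > 0 else -1 if h < 0 else state[i])
    return new

def self_organize(n=32, steps=2000, run=20, rng=random):
    """Activity-dependent rewiring in the spirit of Bornholdt-Rohlf (sketch)."""
    state = [rng.choice((-1, 1)) for _ in range(n)]
    links = [dict() for _ in range(n)]
    for i in range(n):
        for j in rng.sample(range(n), 3):      # start with ~3 inputs per node
            links[i][j] = rng.choice((-1, 1))
    for _ in range(steps):
        changed = [False] * n
        for _ in range(run):                   # let the dynamics run a while
            new = step(state, links)
            for i in range(n):
                if new[i] != state[i]:
                    changed[i] = True
            state = new
        i = rng.randrange(n)
        if changed[i] and links[i]:            # active node: prune an input
            del links[i][rng.choice(list(links[i]))]
        elif not changed[i]:                   # frozen node: grow an input
            links[i][rng.randrange(n)] = rng.choice((-1, 1))
    return sum(len(l) for l in links) / n      # mean in-degree

print(self_organize(n=24, steps=800, run=10, rng=random.Random(0)))
```

The returned mean connectivity hovers near a small value rather than growing or vanishing, which is the self-organization effect the abstract appeals to.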
The Ising Model for Neural Data: Model Quality and Approximate Methods for Extracting Functional Connectivity
We study pairwise Ising models for describing the statistics of multi-neuron
spike trains, using data from a simulated cortical network. We explore
efficient ways of finding the optimal couplings in these models and examine
their statistical properties. To do this, we extract the optimal couplings for
subsets of size up to 200 neurons, essentially exactly, using Boltzmann
learning. We then study the quality of several approximate methods for finding
the couplings by comparing their results with those found from Boltzmann
learning. Two of these methods, inversion of the TAP equations and an
approximation proposed by Sessak and Monasson, are remarkably accurate. Using
these approximations for larger subsets of neurons, we find that extracting
couplings using data from a subset smaller than the full network tends
systematically to overestimate their magnitude. This effect is described
qualitatively by infinite-range spin glass theory for the normal phase. We also
show that a globally correlated input to the neurons in the network leads to a
small increase in the average coupling. However, the pair-to-pair variation of
the couplings is much larger than this and reflects intrinsic properties of the
network. Finally, we study the quality of these models by comparing their
entropies with that of the data. We find that they perform well for small
subsets of the neurons in the network, but the fit quality starts to
deteriorate as the subset size grows, signalling the need to include higher
order correlations to describe the statistics of large networks.
Comment: 12 pages, 10 figures
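As a rough illustration of coupling extraction, the zeroth-order (naive mean-field) relative of the inversions mentioned above reads J = -(C^-1) off the diagonal, with C the connected correlation matrix. A minimal sketch of that simplification, not of the Boltzmann-learning or TAP machinery the paper actually uses:

```python
import numpy as np

def naive_mf_couplings(samples):
    """Naive mean-field inversion: J = -(C^-1) with the diagonal zeroed,
    where C is the covariance matrix of the +/-1 spins. This is the
    zeroth-order version of the TAP and Sessak-Monasson inversions
    (our simplification for illustration)."""
    C = np.cov(samples, rowvar=False)
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)
    return J

# Toy check: two strongly correlated spins and one independent spin.
rng = np.random.default_rng(0)
s0 = rng.choice([-1, 1], size=400)
s1 = s0 * rng.choice([1, -1], size=400, p=[0.9, 0.1])  # mostly aligned with s0
s2 = rng.choice([-1, 1], size=400)                      # independent
J = naive_mf_couplings(np.stack([s0, s1, s2], axis=1))
print(J[0, 1] > 0)  # the correlated pair gets a positive coupling
```

With synthetic data of known structure, the correlated pair receives a clearly positive coupling while the independent spin only picks up sampling noise.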
Neural superposition and oscillations in the eye of the blowfly
Neural superposition in the eye of the blowfly Calliphora erythrocephala was investigated by stimulating single photoreceptors using corneal neutralization through water immersion. Responses of Large Monopolar Cells (LMCs) in the lamina were measured while stimulating one or more of the six photoreceptors connected to the LMC. Responses to flashes of low light intensity on individual photoreceptors add approximately linearly at the LMC. At higher flash intensities, the maximum LMC response to illumination of a single photoreceptor is about half the maximum response to simultaneous illumination of the six connecting photoreceptors. This observation indicates that saturation can occur at a stage of synaptic transmission that precedes the change in the postsynaptic membrane potential.
Stimulation of single photoreceptors yields high-frequency oscillations (about 200 Hz) in the LMC potential, much larger in amplitude than those produced by simultaneous stimulation of the six photoreceptors connected to the LMC. We argue that these oscillations likewise arise from a mechanism that precedes the change in the postsynaptic membrane potential.
A Complex Network Approach to Topographical Connections
The neuronal networks of the mammalian cortex are characterized by the
coexistence of hierarchy, modularity, short- and long-range interactions,
spatial correlations, and topographical connections. Of particular interest,
the latter type of organization implies special demands on the evolutionary and
ontogenetic systems in order to achieve precise maps preserving spatial
adjacencies, even at the expense of isometry. Although the object of intensive
biological research, the elucidation of the main anatomic-functional purposes
of the ubiquitous topographical connections in the mammalian brain remains an
elusive issue. The present work reports on how recent results from complex
network formalism can be used to quantify and model the effect of topographical
connections between neuronal cells over a number of relevant network properties
such as connectivity, adjacency, and information broadcasting. While the
topographical mapping between the two cortical modules is achieved by connecting
the nearest cells of each module, three kinds of network models are adopted for
implementing intracortical connections (ICC), including random,
preferential-attachment, and short-range networks. It is shown that, though
spatially uniform and simple, topographical connections between modules can
lead to major changes in the network properties, fostering more effective
intercommunication between the involved neuronal cells and modules. The
possible implications of such effects on cortical operation are discussed.
Comment: 5 pages, 5 figures
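The nearest-cell mapping described above is easy to state graph-theoretically: wiring cell i of module A to cell i of module B preserves adjacency, so neighbouring cells of one module reach neighbouring cells of the other in few hops. A minimal sketch with two ring-shaped modules (the ring stands in for the short-range ICC case; the module size is illustrative):

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an undirected graph given as adjacency sets."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def two_module_ring(n, mapping):
    """Two ring modules of n cells each: nodes 0..n-1 form module A,
    nodes n..2n-1 form module B; mapping[i] is the B-cell wired to A-cell i."""
    adj = {u: set() for u in range(2 * n)}
    for i in range(n):
        for base in (0, n):                      # intracortical ring links
            adj[base + i].add(base + (i + 1) % n)
            adj[base + (i + 1) % n].add(base + i)
        adj[i].add(n + mapping[i])               # inter-module connection
        adj[n + mapping[i]].add(i)
    return adj

n = 20
topo = two_module_ring(n, list(range(n)))        # nearest-cell (topographic) map
d = bfs_dist(topo, 0)
print(d[n + 1])  # -> 2: A-cell 0 reaches B-cell 1 in two hops
```

Because the mapping is the identity, spatial adjacency survives the crossing: a cell's neighbours in A project to the neighbours of its partner in B.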
Model of Low-pass Filtering of Local Field Potentials in Brain Tissue
Local field potentials (LFPs) are routinely measured experimentally in brain
tissue, and exhibit strong low-pass frequency filtering properties, with high
frequencies (such as action potentials) being visible only at very short
distances (10~) from the recording electrode. Understanding
this filtering is crucial to relate LFP signals with neuronal activity, but not
much is known about the exact mechanisms underlying this low-pass filtering. In
this paper, we investigate a possible biophysical mechanism for the low-pass
filtering properties of LFPs. We investigate the propagation of electric fields
and its frequency dependence close to the current source, i.e., at length scales
on the order of the average interneuronal distance. We take into account the
presence of a high density of cellular membranes around current sources, such
as glial cells. By considering them as passive cells, we show that under the
influence of the electric source field, they respond by polarisation, i.e.,
creation of an induced field. Because of the finite velocity of ionic charge
movement, this polarization will not be instantaneous. Consequently, the
induced electric field will be frequency-dependent, and much reduced for high
frequencies. Our model establishes that with respect to frequency attenuation
properties, this situation is analogous to an equivalent RC circuit or, better,
to a system of coupled RC circuits. We present a number of numerical simulations
of the induced electric field for biologically realistic parameter values, and
show this frequency-filtering effect as well as the attenuation of
extracellular potentials with distance. We suggest that induced electric fields
in the passive cells surrounding neurons are the physical origin of the
frequency-filtering properties of LFPs.
Comment: 10 figures, revised tex file and revised figures
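The RC-circuit analogy implies the standard first-order low-pass attenuation 1/sqrt(1 + (2*pi*f*tau)^2). A quick numerical illustration; the time constant tau is an assumed illustrative value, not one fitted by the paper:

```python
import math

def rc_attenuation(f, tau):
    """|H(f)| of a first-order RC low-pass filter with time constant tau = R*C."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f * tau) ** 2)

tau = 5e-3  # 5 ms, illustrative only
for f in (1.0, 10.0, 100.0, 1000.0):  # Hz: from LFP band up to spike band
    print(f, rc_attenuation(f, tau))
```

Slow LFP components pass almost unattenuated while spike-band frequencies are strongly suppressed, matching the qualitative claim of the abstract.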
Nonlocal mechanism for cluster synchronization in neural circuits
The interplay between the topology of cortical circuits and synchronized
activity modes in distinct cortical areas is a key enigma in neuroscience. We
present a new nonlocal mechanism governing the periodic activity mode: the
greatest common divisor (GCD) of network loops. For a stimulus to one node, the
network splits into GCD-clusters in which cluster neurons are in zero-lag
synchronization. For complex external stimuli, the number of clusters can be
any common divisor. The synchronized mode and the transients to synchronization
pinpoint the type of external stimulus. The findings, supported by an
information-mixing argument and simulations of Hodgkin-Huxley
population-dynamics networks with unidirectional connectivity and synaptic
noise, call for reexamining the sources of correlated activity in the cortex
and shorter information-processing time scales.
Comment: 8 pages, 6 figures
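The GCD rule itself is simple to state: for directed loops of lengths l_1, ..., l_k through the stimulated node, the predicted number of zero-lag clusters is gcd(l_1, ..., l_k). A minimal sketch:

```python
from functools import reduce
from math import gcd

def cluster_count(loop_lengths):
    """Number of zero-lag clusters predicted by the GCD rule: the greatest
    common divisor of all directed-loop lengths through the stimulated node."""
    return reduce(gcd, loop_lengths)

print(cluster_count([6, 9, 12]))  # -> 3
print(cluster_count([4, 6]))      # -> 2
print(cluster_count([3, 5]))      # -> 1 (the whole network synchronizes)
```

The loop lengths here are illustrative; in the paper they are a property of the cortical circuit's topology.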
Processing of information in synchronously firing chains in networks of neurons
The Abeles model of cortical activity assumes that, in the absence of stimulation, neural activity can to zeroth order be described by a Poisson process. Here the model is extended to describe information processing by synfire chains embedded in a network whose activity is uncorrelated with the synfire chain. A quantitative derivation of the transfer function from this concept is given.
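The zeroth-order Poisson background assumed by the Abeles model can be sketched by drawing exponential inter-spike intervals; rate and duration below are illustrative, not values from the paper:

```python
import random

def poisson_spike_train(rate_hz, duration_s, rng=random):
    """Spike times of a homogeneous Poisson process: successive inter-spike
    intervals are drawn from an exponential distribution with mean 1/rate."""
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate_hz)
        if t >= duration_s:
            return spikes
        spikes.append(t)

train = poisson_spike_train(10.0, 5.0, random.Random(1))
print(len(train))  # on average rate * duration = 50 spikes
```

Such uncorrelated background trains are what a synfire-chain volley must be detected against.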
Beyond persons: extending the personal / subpersonal distinction to non-rational animals and artificial agents
The distinction between personal-level explanations and subpersonal ones has been the subject of much debate in philosophy. We understand it as one between explanations that focus on an agent's interaction with its environment, and explanations that focus on the physical or computational enabling conditions of such an interaction. The distinction, understood this way, is necessary for a complete account of any agent, rational or not, biological or artificial. In particular, we review some recent research in Artificial Life that purports to do without the distinction entirely, while using agent-centered concepts all the way. It is argued that the rejection of agent-level explanations in favour of mechanistic ones is due to an unmotivated need to choose between representationalism and eliminativism. The dilemma is a false one if the possibility of a radical form of externalism is considered.